211 research outputs found

    Nonparametric bootstrapping of the reliability function for multiple copies of a repairable item modeled by a birth process

    Nonparametric bootstrap inference is developed for the reliability function estimated from censored, nonstationary failure time data for multiple copies of repairable items. We assume that each copy has a known, but not necessarily the same, observation period; and upon failure of one copy, design modifications are implemented for all copies operating at that time to prevent further failures arising from the same fault. This implies that, at any point in time, all operating copies will contain the same set of faults. Failures are modeled as a birth process because there is a reduction in the rate of occurrence at each failure. The data structure comprises a mix of deterministic and random censoring mechanisms corresponding to the known observation period of the copy and the random censoring time of each fault. Hence, bootstrap confidence intervals and regions for the reliability function measure the length of time a fault can remain within the item until realization as failure in one of the copies. Explicit formulae derived for the re-sampling probabilities greatly reduce dependency on Monte Carlo simulation. Investigations show a small bias arising in re-sampling that can be quantified and corrected. The variability generated by the re-sampling approach approximates the variability in the underlying birth process, and so supports appropriate inference. An illustrative example describes application to a problem and discusses the validity of the modeling assumptions within industrial practice.
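
    The abstract above turns on estimating a reliability function from censored data and building bootstrap confidence intervals around it. As a purely illustrative sketch, and not the paper's birth-process model or its explicit re-sampling probabilities, the following shows a generic percentile bootstrap around a Kaplan-Meier reliability estimate; all data are invented.

```python
# Minimal sketch: percentile bootstrap confidence interval for a reliability
# function R(t) estimated by Kaplan-Meier from right-censored failure times.
# This is generic resampling, not the paper's birth-process formulation.
import numpy as np

rng = np.random.default_rng(0)

def km_reliability(times, events, t):
    """Kaplan-Meier estimate of R(t) = P(T > t) from (time, event) pairs."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    at_risk = n - np.arange(n)                       # items still under observation
    factors = np.where(events == 1, 1.0 - 1.0 / at_risk, 1.0)
    surv = np.cumprod(factors)
    idx = np.searchsorted(times, t, side="right") - 1
    return 1.0 if idx < 0 else float(surv[idx])

def bootstrap_ci(times, events, t, B=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for R(t)."""
    n = len(times)
    stats = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)             # resample records with replacement
        stats[b] = km_reliability(times[idx], events[idx], t)
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Illustrative censored data: event=1 is an observed failure, event=0 is censoring.
times = np.array([12., 18., 25., 31., 40., 40., 47., 55., 60., 60.])
events = np.array([1, 1, 0, 1, 1, 0, 1, 0, 1, 0])
print(km_reliability(times, events, 30.0), bootstrap_ci(times, events, 30.0))
```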

    Confidence intervals for reliability growth models with small sample sizes

    Fully Bayesian approaches to analysis can be overly ambitious where there exist realistic limitations on the ability of experts to provide prior distributions for all relevant parameters. This research was motivated by situations where expert judgement exists to support the development of prior distributions describing the number of faults potentially inherent within a design but could not support useful descriptions of the rate at which they would be detected during a reliability-growth test. This paper develops inference properties for a reliability-growth model. The approach assumes a prior distribution for the ultimate number of faults that would be exposed if testing were to continue ad infinitum, but estimates the parameters of the intensity function empirically. A fixed-point iteration procedure to obtain the maximum likelihood estimate is investigated for bias and conditions of existence. The main purpose of this model is to support inference in situations where failure data are few. A procedure for providing statistical confidence intervals is investigated and shown to be suitable for small sample sizes. An application of these techniques is illustrated by an example.
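
    The abstract describes a fixed-point iteration for the maximum likelihood estimate of the intensity-function parameters. As an illustration only, the sketch below applies a standard fixed-point iteration to the Goel-Okumoto NHPP reliability growth model, m(t) = a(1 - exp(-b t)); the model form, the starting value, and the failure times are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch, not the paper's exact model: MLE for the detection-rate
# parameter b of a Goel-Okumoto NHPP, m(t) = a * (1 - exp(-b t)), by fixed-point
# iteration, with the ultimate-fault parameter a profiled out. The paper instead
# places a prior on a and estimates the intensity parameters empirically.
import math

def fit_goel_okumoto(failure_times, T, b0=0.01, tol=1e-10, max_iter=500):
    """Fixed-point iteration for the MLE of (a, b) given failures in [0, T]."""
    n = len(failure_times)
    S = sum(failure_times)
    b = b0
    for _ in range(max_iter):
        e = math.exp(-b * T)
        b_new = n / (S + n * T * e / (1.0 - e))   # rearranged likelihood equation in b
        if abs(b_new - b) < tol:
            b = b_new
            break
        b = b_new
    a = n / (1.0 - math.exp(-b * T))              # profile MLE of the ultimate number of faults
    return a, b

# Illustrative data: cumulative failure times (hours) over a 500-hour test.
times = [25, 60, 110, 180, 260, 350, 460]
a_hat, b_hat = fit_goel_okumoto(times, T=500.0)
print(f"a_hat = {a_hat:.2f}, b_hat = {b_hat:.5f}")
```

    The iteration converges slowly when the observed growth is weak, hence the generous iteration cap in this sketch.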

    Point process model for reliability analysis of evolutionary designs

    This paper, from the 3rd International Conference on Mathematical Methods in Reliability, presents a point process model for the reliability analysis of evolutionary designs.

    Multivariate reliability modelling with empirical Bayes inference

    Recent developments in technology permit detailed descriptions of system performance to be collected and stored. Consequently, more data are available about the occurrence, or non-occurrence, of events across a range of classes through time. Typically, this implies that reliability analysis has more information about the exposure history of a system within different classes of events. For highly reliable systems, there may be relatively few failure events. Thus there is a need to develop statistical inference to support reliability estimation when there is a low ratio of failures relative to event classes. In this paper we show how empirical Bayes methods can be used to estimate a multivariate reliability function for a system by modelling the vector of times to realise each failure root cause.
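
    For orientation only, the sketch below shows the structure such a multivariate reliability function can take under assumptions the abstract does not state: exponential times to each failure root cause, acting independently. The rates are placeholders standing in for the empirical Bayes estimates the paper develops.

```python
# Minimal structural sketch, under assumptions not stated in the abstract:
# each failure root cause j has an exponential time-to-realisation with rate
# lam[j] and the causes act independently, so the multivariate reliability
# function is R(t1,...,tk) = P(T1 > t1, ..., Tk > tk).
# The rates below are placeholders; the paper estimates them by empirical Bayes.
import math

def multivariate_reliability(rates, times):
    """Joint probability that every root cause j is not realised before times[j]."""
    return math.exp(-sum(l * t for l, t in zip(rates, times)))

lam = [2e-4, 5e-5, 1.2e-4]   # per-hour rates for three root causes (illustrative)
print(multivariate_reliability(lam, [1000.0, 1000.0, 1000.0]))   # joint reliability at 1,000 h
```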

    Prediction intervals for reliability growth models with small sample sizes

    Engineers and practitioners contribute to society through their ability to apply basic scientific principles to real problems in an effective and efficient manner. They must collect data to test their products every day as part of the design and testing process, and also after the product or process has been rolled out, to monitor its effectiveness. Model building, data collection, data analysis and data interpretation form the core of sound engineering practice. After the data have been gathered, the engineer must be able to sift and interpret them correctly so that meaning can be extracted from a mass of undifferentiated numbers or facts. To do this he or she must be familiar with the fundamental concepts of correlation, uncertainty, variability and risk in the face of uncertainty. In today's global and highly competitive environment, continuous improvement in the processes and products of any field of engineering is essential for survival. Many organisations have shown that the first step to continuous improvement is to integrate the widespread use of statistics and basic data analysis into the manufacturing development process, as well as into the day-to-day business decisions taken in regard to engineering processes. The Springer Handbook of Engineering Statistics gathers together the full range of statistical techniques required by engineers from all fields to gain sensible statistical feedback on how their processes or products are functioning and to give them realistic predictions of how these could be improved.

    Empirical Bayes estimates of development reliability for one-shot devices

    This article describes a method for estimating the reliability of a system under development that is an evolution of previous designs. We present an approach to making effective use of heritage data from similar operational systems to estimate the reliability of a design that is yet to realise any data. The approach also has a mechanism to adjust initial estimates in the light of the sparse data that become available in the early stages of test. While the estimation approach, known as empirical Bayes, is generic, we focus on one-shot devices, as this was the type of system that provided the practical motivation for this work and for which we illustrate an application.
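
    One common way to realise this kind of empirical Bayes estimate for one-shot (success/failure) devices is a beta-binomial model: fit a Beta prior to heritage success fractions, then update it with the sparse test results of the new design. The sketch below illustrates that route; the moment-matching fit, the counts, and the test results are all assumptions for illustration, not the paper's data or exact procedure.

```python
# Minimal sketch: beta-binomial empirical Bayes for a one-shot device success
# probability. A Beta(alpha, beta) prior is fitted by the method of moments to
# heritage success fractions, then updated with sparse development test data.
# All counts are illustrative; this is not claimed to be the paper's procedure.
import statistics

def fit_beta_moments(successes, trials):
    """Method-of-moments Beta(alpha, beta) fit to heritage success fractions."""
    p = [s / n for s, n in zip(successes, trials)]
    m, v = statistics.mean(p), statistics.variance(p)
    common = m * (1 - m) / v - 1        # crude moment match; ignores unequal trial sizes
    return m * common, (1 - m) * common

# Heritage data: (successes, firings) for similar operational one-shot devices.
heritage_s = [46, 90, 26, 118]
heritage_n = [50, 97, 30, 120]
alpha, beta = fit_beta_moments(heritage_s, heritage_n)

# Sparse development test data for the new design: 4 successes in 5 trials.
s_new, n_new = 4, 5
posterior_mean = (alpha + s_new) / (alpha + beta + n_new)
print(f"prior mean = {alpha / (alpha + beta):.3f}, posterior mean = {posterior_mean:.3f}")
```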

    Elicitation of structured engineering judgement to inform a focussed FMEA

    The practical use of Failure Mode and Effects Analysis (FMEA) has been criticised because it is often implemented too late and in a manner that does not allow information to be fed back to inform the product design. Lessons learnt from the use of elicitation methods to gather structured expert judgement about engineering concerns for a new product design have led to an enhancement of the approach for implementing design and process FMEA. We refer to this variant as a focussed FMEA, since the goal is to enable relevant engineers to contribute to the analysis and to act upon the outcomes in such a way that all activities focus upon the design needs. The paper begins with a review of the proposed process to identify and quantify engineering concerns. The pros and cons of using elicitation methods, originally designed to support construction of a Bayesian prior, to inform a focussed FMEA are analysed, and a comparison of the proposed process with the existing standards is made. An industrial example is presented to illustrate customisation of the process and to discuss the impact on the design process.

    The safety case and the lessons learned for the reliability and maintainability case

    This paper examines the safety case and the lessons learned for the reliability and maintainability case.

    Optimal scheduling of reliability development activities

    Probabilistic Safety Assessment and Management is a collection of papers presented at the PSAM 7 - ESREL '04 Conference in June 2004. The joint Conference provided a forum for the presentation of the latest developments in methodology and application of probabilistic and reliability methods in various industries. Innovations in methodology as well as practical applications in the areas of probabilistic safety assessment and of reliability analysis are presented in this six-volume set. The aim of these applications is the optimisation of technological systems and processes from the perspective of risk-informed safety management, while also taking economic and environmental aspects into account. In particular, the joint Conference achieved enhanced communication, the sharing of experience and the integration of approaches, not only among the various industries but also on a truly global basis, by bringing together leading experts from all over the world. Over the last four decades, contemporary researchers have continuously been working to provide modern societies with a systematic, self-consistent and coherent framework for making decisions on at least one class of risks, those stemming from modern technological applications. Most of the effort has been spent in developing methods and techniques for assessing the dependability of technological systems, and assessing or estimating the levels of safety and associated risks. A wide spectrum of engineering, natural and economic sciences has been involved in this assessment effort. The developments have moved beyond research endeavours; they have been applied and utilised in real socio-technical environments and have become established, while modern technology continues to present new challenges and to raise new questions. Consequently, Probabilistic Safety Assessment and Management covers both well-established practices and open issues in the fields addressed by the Conference, identifying areas where maturity has been reached and those where more development is needed. The papers reflect a wide variety of disciplines, such as principles and theory of reliability and risk analysis, systems modelling and simulation, consequence assessment, human and organisational factors, structural reliability methods, software reliability and safety, insights and lessons from risk studies and management/decision making. A diverse range of application areas is represented, including aviation and space, chemical processing, civil engineering, energy, environment, information technology, legal, manufacturing, health care, defence, transportation and waste management.

    Empirical Bayes methodology for estimating equipment failure rates with application to power generation plants

    Many reliability databases pool event data for equipment across different plants. Pooling may occur both within and between organizations, with the intention of sharing data across common items within similar operating environments to provide better estimates of reliability and availability. Frequentist estimation methods can be poor when few, or no, events occur, even when equipment operates for long periods. An alternative approach based upon empirical Bayes estimation is proposed. The new method is applied to failure data analysis in power generation plants and found to provide credible insights. A statistical comparison between the proposed and frequentist methods shows that empirical Bayes is capable of generating more accurate estimates.
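
    A standard illustration of empirical Bayes shrinkage for pooled event data is the gamma-Poisson model: each item's failure count over its exposure time is Poisson with a rate drawn from a gamma distribution whose hyperparameters are fitted from the pooled data. The sketch below shows that generic estimator with a crude method-of-moments fit; the counts, exposures, and moment-matching step are illustrative assumptions, not the paper's estimator or data.

```python
# Minimal sketch of gamma-Poisson empirical Bayes shrinkage: failure count x_i
# over exposure T_i is Poisson(lambda_i * T_i) with lambda_i ~ Gamma(alpha, beta).
# Hyperparameters are fitted crudely by the method of moments across the pooled
# items, and the posterior mean (alpha + x_i)/(beta + T_i) shrinks the raw rate
# x_i / T_i toward the pooled mean. All numbers are illustrative.
import numpy as np

counts = np.array([0, 2, 0, 9, 1, 4, 0, 1])                        # failures per plant
exposures = np.array([20.0, 35.0, 15.0, 50.0, 25.0, 40.0, 10.0, 30.0])  # unit-years

raw = counts / exposures                               # frequentist rates, zero for no events
m = raw.mean()
between = max(raw.var(ddof=1) - m * (1.0 / exposures).mean(), 1e-12)  # strip Poisson noise
beta = m / between                                     # Gamma(alpha, beta) moment match
alpha = m * beta

eb = (alpha + counts) / (beta + exposures)             # posterior mean failure rates
for x, T, r, e in zip(counts, exposures, raw, eb):
    print(f"x={x} T={T:5.1f}  raw={r:.4f}  empirical_bayes={e:.4f}")
```

    Items with no recorded failures receive a small positive rate rather than zero, which is the behaviour the abstract motivates when comparing empirical Bayes with frequentist estimates.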